memory cache

US [ˈmeməri kæʃ]  UK [ˈmeməri kæʃ]
  • 内存缓存;缓存;内存高速缓存;存储器高速缓存

noun

1
(computer science) RAM memory that is set aside as a specialized buffer storage that is continually updated; used to optimize data transfers between system elements with different characteristics
Synonym: cache

Source: WordNet

  1. It works by retrieving the data row information that the reader transaction needs from the DB2 logs, usually from the log's memory cache, which is very fast.

    它采用的方法是从DB2日志(常常是从非常快速的日志内存缓存)中获取读者事务需要的数据行。

  2. Research on the Application of Memory Cache Technology in Portal Website Development

    内存缓存技术在门户网站开发中的应用研究

  3. Method of Improving AAA Server Performance Using Memory Cache

    使用高速缓存提高AAA服务器性能的方法

  4. A Parallel Memory Cache System and Its Implementation

    一种并行内存缓冲系统及其实现技术

  5. This improves the performance characteristics of the system memory cache and, in turn, the application itself.

    这有助于改进系统内存缓存的性能特征,从而提高应用程序本身的性能。

  6. This paper discusses an innovative way to make use of the memory cache to resolve the problem.

    论文提出了使用内存缓存的方式对该问题进行解决。

  7. If the value is 0, the memory cache will not be created.

    如果该值为0,那么将不会创建内存高速缓存。

  8. Furthermore, we ran three tests on the memory cache system and evaluated the existing empirical parameters.

    随后对内存缓存的实现进行了试用,并对现有的经验参数进行评估。

  9. When the memory cache region becomes full, LRU removes the least recently used cached data first.

    当内存缓存区域满时,LRU会首先删除最近最少使用的缓存数据。
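
A minimal sketch of the least-recently-used (LRU) eviction behaviour described in example 9; the class and key names below are illustrative assumptions, not code from any system cited above:

```python
from collections import OrderedDict

class LRUMemoryCache:
    """A tiny in-memory cache that evicts the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # ordered from least to most recently used

    def get(self, key):
        if key not in self._items:
            return None                   # cache miss
        self._items.move_to_end(key)      # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict the least recently used entry

cache = LRUMemoryCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # "a" becomes the most recently used entry
cache.put("c", 3)    # the cache region is full, so "b" is evicted first
assert cache.get("b") is None
```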

  10. This paper first studies the memory cache feature and describes the design ideas for the messaging system, and then implements that design.

    本文首先研究了内存缓存功能在短信系统上的设计思路,并根据其进行了短信内存缓存功能设计实现。

  11. MemCacheSize sets the number of elements that can be held in a memory cache at any one time.

    MemCacheSize设置一次能够保存在内存高速缓存中的元素数目。

  12. This algorithm can improve the hit rate of the backend servers' main memory cache, thus increasing the performance of the whole cluster system.

    该算法可提高后端服务器的主存Cache命中率,从而提高了整个集群系统的性能。

  13. Adaptive-level Memory Cache Technique on Proxy Server

    代理服务器上的自适应存储缓冲技术

  14. All modern CPUs must utilize a local memory cache to minimize the latency of fetching instructions and data from memory.

    所有现代的CPU必须使用本地存储的缓存,将获取指令和数据的延迟降到最低。

  15. This enables you to store and later recover entire objects from the memory cache, instead of reconstructing them by hand within your application.

    这就让您可以存储并随后从内存存储恢复全部对象,而不是在您的应用程序内手动重构它们。

  16. Eaccelerator.shm_size defines the size of the shared memory cache, which is where the compiled scripts are stored.

    shmsize定义共享高速缓存的大小,编译后的脚本就存储在这里。

  17. You can now configure a private memory cache for every CPUVP to decrease the time of server memory allocation on large multiprocessor computers.

    现在可以为每个CPUVP配置一个私有内存缓存,以减少大型多处理器计算机上的服务器内存分配时间。

  18. It can be configured to offload cache to disk if the memory cache is full (via the disk offload function).

    还可以设置如果内存高速缓存已满,就将缓存转存到磁盘(通过磁盘转存功能)。

  19. If files can be served from memory cache, then that decreases the average service time, and hence increases the number of connections per second the server can handle.

    如果可以从内存缓存中获取文件,就可以降低平均服务时间,因此增加服务器每秒能够处理的连接数。

  20. Memcached is an open source project designed to make use of the spare RAM in many servers to act as a memory cache for frequently accessed pieces of information.

    memcached是一个开源项目,旨在利用多个服务器内的多余RAM来充当一个可存放经常被访问信息的内存缓存。
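
Example 20 describes memcached acting as a shared memory cache for frequently accessed data. A minimal usage sketch with the third-party pymemcache client follows; the host, port, key, and expiry values are illustrative assumptions:

```python
from pymemcache.client.base import Client

# Connect to a memcached instance (host and port are assumptions for illustration).
client = Client(("localhost", 11211))

key = "user:42:profile"
cached = client.get(key)
if cached is None:
    # Cache miss: fetch from the slower backing store, then repopulate the cache.
    cached = b"serialized profile data"   # e.g. loaded from a database
    client.set(key, cached, expire=60)    # keep it in RAM for 60 seconds

print(cached)
```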

  21. This paper proposes an adaptive-level memory cache strategy by which accessed Web resources can be stored adaptively, relieving the load on the Web server and reducing document retrieval latency.

    该文提出的自适应存储缓冲策略应用于代理服务器上,通过自适应地存储Web资源,缩短了文档的检索时间。

  22. AAA servers supporting the RADIUS protocol are widely used in the telecommunications industry. This paper introduces a memory cache technique that can improve AAA server performance by 2 to 3 orders of magnitude.

    支持Radius协议的AAA服务器在电信行业得到广泛应用,该文介绍了一种高速缓存技术,将AAA服务器的性能提高了2~3个数量级。

  23. This paper discusses the popular word-based inverted file index model and the characteristics of index data, presents a run-length coding index compression algorithm, and, using this compression coding, studies a fast index creation process based on a memory cache.

    针对目前比较流行的基于词的倒排文档索引模型,结合全文检索数据的特点,提出了变长编码的索引压缩算法。利用该压缩编码,研究了基于内存缓存的快速创建索引的流程。

  24. Interface Design of Main Memory for Cache Experiment

    Cache实验中的主存储器接口设计

  25. A new memory encryption cache (MEC) structure is proposed.

    提出了一种新的存储器加密Cache(MEC)结构。

  26. Instance resources include memory, cache, processes, sessions, thresholds, and storage resources.

    实例资源包括内存、缓存、进程、会话、阈值和存储资源。

  27. In order to bridge the speed gap between the CPU and memory, a cache is introduced between them.

    为了缓和CPU与存储器之间的速度差距,在计算机系统的CPU与主存之间引入了cache。

  28. Three levels of data storage (database storage, file storage, memory/cache storage) will be involved in this platform.

    数据存储将用到数据库存储、文件存储、内存(缓存)存储。

  29. Using memory as a cache for disk data and optimizing the servicing of I/O requests is an effective way to alleviate the I/O bottleneck.

    较好地利用内存作为缓存,并优化磁盘设备的请求处理,是缓解系统I/O瓶颈的有效途径。

  30. In most multimedia SoCs (systems-on-chip), in order to meet real-time requirements, the application program and data are stored in on-chip memory or cache, which has the advantage of convenient execution and better performance.

    最后,本文依据多媒体SoC(System-on-chip)实时操作的要求,应用程序和数据尽量存放在片上存贮器或高速缓存中,以加快处理速度。